258 research outputs found
Electron and Photon Interactions in the Regime of Strong LPM Suppression
Most searches for ultra-high energy (UHE) astrophysical neutrinos look for
radio emission from the electromagnetic and hadronic showers produced in their
interactions. The radio-frequency spectrum and angular distribution depend on
the shower development and so are sensitive to the interaction cross sections. At
energies above about 10^{16} eV (in ice), the Landau-Pomeranchuk-Migdal (LPM)
effect significantly reduces the cross sections for the two dominant
electromagnetic interactions: bremsstrahlung and pair production. At higher
energies, above about 10^{20} eV, the photonuclear cross section becomes larger
than that for pair production, and direct pair production and electronuclear
interactions become dominant over bremsstrahlung. The electron interaction
length reaches a maximum around 10^{21} eV, and then decreases slowly as the
electron energy increases further. In this regime, the growth in the photon
cross section and electron energy loss moderates the rise in nu_e shower
length, which rises from ~10 m at 10^{16} eV to ~50 m at 10^{19} eV and ~100 m
at 10^{20} eV, but only to ~1 km at 10^{24} eV. In contrast, without
photonuclear and electronuclear interactions, the shower length would be over
10 km at 10^{24} eV.Comment: 10 pages, 9 figures. Submitted to Physical Review
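The suppression described above can be sketched numerically. In the standard Migdal high-energy limit, the bremsstrahlung cross section is suppressed by a factor of roughly sqrt(E_LPM / E) for electron energies E well above the material's LPM energy. The value of E_LPM for ice used below (~0.3 PeV) is an assumed order-of-magnitude figure for illustration, not a number taken from the paper:

```python
import math

# Assumed characteristic LPM energy for ice (~0.3 PeV), order of magnitude only.
E_LPM_ICE = 3e14  # eV

def lpm_suppression(E_eV):
    """Approximate LPM suppression factor for the bremsstrahlung cross
    section, using the Migdal high-energy scaling S(E) ~ sqrt(E_LPM / E)."""
    if E_eV <= E_LPM_ICE:
        return 1.0  # suppression is negligible below the LPM energy
    return math.sqrt(E_LPM_ICE / E_eV)

for E in (1e16, 1e19, 1e21):
    print(f"E = {E:.0e} eV -> suppression ~ {lpm_suppression(E):.3g}")
```

This toy scaling illustrates why the suppression deepens steadily with energy, which is what makes the electromagnetic showers stretch out as described in the abstract.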
The Test-Retest Reliability And Minimal Detectable Change In The Modified Fresno Test In Doctor Of Physical Therapy Students
Background: The American Physical Therapy Association identified the need for training in evidence-based practice (EBP) and set forth guidelines for doctor of physical therapy (DPT) curricula to educate practitioners who are efficient and critical users of best evidence. Since DPT programs are teaching EBP, educators need an assessment tool to evaluate the competence of students. The Modified Fresno Test (MFT) of EBP has been validated for physical therapists, and its test-retest reliability and minimal detectable change (MDC) have been established for first year DPT students.
Objective: The purpose is to determine the test-retest reliability and MDC of the MFT in first, second, and third year DPT students. A secondary purpose is to compare the mean total score of the MFT among the three student groups.
Design: Test-retest design
Methods: Using a simple random sample, we recruited 21 University of New England (UNE) DPT students from each of the three classes. The participants completed the MFT twice, separated by 14 days, in a classroom on UNE's campus.
Results: Students in the third year class completed the validated 13-item MFT; due to a photocopying error, students in the first and second year classes completed an 11-item MFT. The first year students had the lowest 11-item MFT mean score (68.5 points), which was significantly lower than those of the second and third year student groups (85.7 and 88.2 points, respectively). First year students had the lowest ICC and highest MDC (0.23 and 40.4 points). Third year students had the highest ICC and lowest MDC (0.73 and 23.0 points).
Limitations: We were unable to analyze scores from the 13-item MFT for all three student groups. The rater did not receive training in the MFT scoring rubric. Participation in the study was not a requirement.
Conclusions: The 13- and 11-item MFTs have good test-retest reliability for UNE's third year DPT student group. The 11-item MFT has poor to moderate test-retest reliability for first and second year DPT students.
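The MDC values reported above are conventionally derived from the ICC and the score standard deviation via the standard error of measurement. A minimal sketch of that calculation, using a hypothetical standard deviation of 15 points (the study's actual SDs are not given in this abstract):

```python
import math

def mdc95(sd, icc):
    """Minimal detectable change at 95% confidence, via the standard
    formula MDC95 = 1.96 * sqrt(2) * SEM, where SEM = SD * sqrt(1 - ICC)."""
    sem = sd * math.sqrt(1.0 - icc)
    return 1.96 * math.sqrt(2.0) * sem

# Hypothetical SD of 15 points; ICC of 0.73 is the third-year value above.
print(round(mdc95(15.0, 0.73), 1))
```

The formula makes the pattern in the results intuitive: a lower ICC inflates the SEM, which is why the first year group's low ICC (0.23) pairs with the largest MDC.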
Test-Retest Reliability And Minimal Detectable Change Of The Modified Fresno Test Of Evidence Based Practice In DPT Students
Poster presenting a research study whose purpose was to determine the test-retest reliability and MDC of the MFT in first, second, and third year DPT students, and to compare the mean total score of the MFT among the three student groups. Using a simple random sample, 21 students were recruited from each of the three UNE DPT classes. The participants completed the MFT twice, separated by 14 days, in a UNE classroom. Students in the third year class completed the validated 13-item MFT and, due to a photocopying error, students in the first and second year classes completed an 11-item MFT. The first year students had the lowest 11-item MFT mean score (68.5 points), which was significantly lower than those of the second and third year student groups (85.7 and 88.2 points, respectively). First year students had the lowest ICC and highest MDC (0.23 and 40.4 points). Third year students had the highest ICC and lowest MDC (0.73 and 23.0 points). The 13- and 11-item MFTs have good test-retest reliability for UNE's third year DPT student group. The 11-item MFT has poor to moderate test-retest reliability for first and second year DPT students.
Accelerating Large-Scale Data Analysis by Offloading to High-Performance Computing Libraries using Alchemist
Apache Spark is a popular system aimed at the analysis of large data sets,
but recent studies have shown that certain computations---in particular, many
linear algebra computations that are the basis for solving common machine
learning problems---are significantly slower in Spark than when done using
libraries written in a high-performance computing framework such as the
Message-Passing Interface (MPI).
To remedy this, we introduce Alchemist, a system designed to call MPI-based
libraries from Apache Spark. Using Alchemist with Spark helps accelerate linear
algebra, machine learning, and related computations, while still retaining the
benefits of working within the Spark environment. We discuss the motivation
behind the development of Alchemist, and we provide a brief overview of its
design and implementation.
We also compare the performance of pure Spark implementations with that of
Spark implementations that leverage MPI-based codes via Alchemist. To do so, we
use two data science case studies: a large-scale application of the conjugate
gradient method to solve very large linear systems arising in a speech
classification problem, where we see an improvement of an order of magnitude;
and the truncated singular value decomposition (SVD) of a 400 GB
three-dimensional ocean temperature data set, where we see a speedup of up to
7.9x. We also illustrate that the truncated SVD computation is easily scalable
to terabyte-sized data by applying it to data sets of sizes up to 17.6 TB.
Comment: Accepted for publication in Proceedings of the 24th ACM SIGKDD
International Conference on Knowledge Discovery and Data Mining, London, UK,
2018
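The first case study offloads conjugate gradient solves to MPI libraries. The algorithm itself is standard; a minimal NumPy sketch for a symmetric positive-definite system (illustrative only; this is not Alchemist's or the MPI library's implementation):

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    """Plain conjugate gradient for a symmetric positive-definite A.
    Shows the algorithm being accelerated; the paper offloads solves
    like this to MPI-based libraries via Alchemist."""
    x = np.zeros_like(b)
    r = b - A @ x          # initial residual
    p = r.copy()           # initial search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)   # step length along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p  # new conjugate direction
        rs = rs_new
    return x

# Small SPD system for demonstration.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = conjugate_gradient(A, b)
print(np.allclose(A @ x, b))
```

At the speech-classification scale described in the abstract, each `A @ p` product dominates the cost, which is where the MPI offload pays off.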
The use of Convolutional Neural Networks for signal-background classification in Particle Physics experiments
The success of Convolutional Neural Networks (CNNs) in image classification
has prompted efforts to study their use for classifying image data obtained in
Particle Physics experiments. Here, we discuss our efforts to apply CNNs to 2D
and 3D image data from particle physics experiments to classify signal from
background.
In this work we present an extensive convolutional neural architecture
search, achieving high accuracy for signal/background discrimination for a HEP
classification use case based on simulated data from the IceCube Neutrino
Observatory and an ATLAS-like detector. We demonstrate, among other things, that
we can achieve the same accuracy as complex ResNet architectures using CNNs with
fewer parameters, and we present comparisons of computational requirements,
training and inference times.
Comment: Contribution to Proceedings of CHEP 2019, Nov 4-8, Adelaide,
Australia
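The core operation such classifiers apply to detector images is 2D convolution followed by pooling and a decision head. A toy NumPy stand-in (not the paper's architecture; the kernel, pooling, and head below are illustrative choices):

```python
import numpy as np

def conv2d_valid(img, kern):
    """Naive 'valid' 2D cross-correlation, the core CNN operation."""
    H, W = img.shape
    kh, kw = kern.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i+kh, j:j+kw] * kern)
    return out

def tiny_cnn_score(img, kern, w, b):
    """One conv layer -> ReLU -> global average pool -> sigmoid score in
    [0, 1], a toy stand-in for a signal/background classifier."""
    feat = np.maximum(conv2d_valid(img, kern), 0.0)   # ReLU activation
    pooled = feat.mean()                              # global average pool
    return 1.0 / (1.0 + np.exp(-(w * pooled + b)))    # sigmoid head

rng = np.random.default_rng(0)
img = rng.standard_normal((8, 8))    # stand-in for a 2D detector image
kern = rng.standard_normal((3, 3))   # one learned filter
print(tiny_cnn_score(img, kern, 1.0, 0.0))
```

Real architectures stack many such filters and layers; the architecture search in the paper varies exactly those choices (depth, filter counts, kernel sizes) against ResNet baselines.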
Matrix Factorization at Scale: a Comparison of Scientific Data Analytics in Spark and C+MPI Using Three Case Studies
We explore the trade-offs of performing linear algebra using Apache Spark,
compared to traditional C and MPI implementations on HPC platforms. Spark is
designed for data analytics on cluster computing platforms with access to local
disks and is optimized for data-parallel tasks. We examine three widely-used
and important matrix factorizations: NMF (for physical plausibility), PCA (for
its ubiquity) and CX (for data interpretability). We apply these methods to
TB-sized problems in particle physics, climate modeling and bioimaging. The
data matrices are tall and skinny, which enables the algorithms to map
conveniently onto Spark's data-parallel model. We perform scaling experiments
on up to 1600 Cray XC40 nodes, describe the sources of slowdowns, and provide
tuning guidance to obtain high performance.
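Of the three factorizations, CX is the least familiar: it approximates A by a subset of its actual columns, chosen by leverage scores, which is what makes the result interpretable. A minimal deterministic sketch (the paper's TB-scale codes use randomized variants; the matrix below is a contrived low-rank example):

```python
import numpy as np

def cx_decomposition(A, k, c):
    """Deterministic CX sketch: rank-k leverage scores select c actual
    columns of A, then X = pinv(C) @ A gives the approximation A ~ C @ X."""
    _, _, Vt = np.linalg.svd(A, full_matrices=False)
    leverage = (Vt[:k] ** 2).sum(axis=0)    # leverage score per column
    cols = np.argsort(leverage)[::-1][:c]   # keep the top-c columns
    C = A[:, cols]
    X = np.linalg.pinv(C) @ A
    return C, X, cols

# Contrived rank-2 matrix so the sketch recovers it exactly.
u1 = np.array([1.0, 2.0, 3.0, 4.0])
u2 = np.array([4.0, 3.0, 2.0, 1.0])
v1 = np.array([3.0, 0.0, 1.0, 0.0, 1.0])
v2 = np.array([0.0, 3.0, 1.0, 2.0, 1.0])
A = np.outer(u1, v1) + np.outer(u2, v2)

C, X, cols = cx_decomposition(A, k=2, c=3)
print(np.allclose(A, C @ X))
```

Because C consists of real columns of A (e.g. actual pixels or grid points), domain scientists can inspect which measurements drive the approximation, the "data interpretability" the abstract refers to.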